Pohang
PoLaRIS Dataset: A Maritime Object Detection and Tracking Dataset in Pohang Canal
Choi, Jiwon, Cho, Dongjin, Lee, Gihyeon, Kim, Hogyun, Yang, Geonmo, Kim, Joowan, Cho, Younggun
Maritime environments often present hazardous situations due to factors such as moving ships or buoys, which become obstacles under the influence of waves. In such challenging conditions, the ability to detect and track potentially hazardous objects is critical for the safe navigation of marine robots. To address the scarcity of comprehensive datasets capturing these dynamic scenarios, we introduce a new multi-modal dataset that includes image and point-wise annotations of maritime hazards. Our dataset provides detailed ground truth for obstacle detection and tracking, including objects as small as 10$\times$10 pixels, which are crucial for maritime safety. To validate the dataset's effectiveness as a reliable benchmark, we conducted evaluations using various methodologies, including state-of-the-art (SOTA) techniques for object detection and tracking. These evaluations are expected to contribute to performance improvements, particularly in the complex maritime environment. To the best of our knowledge, this is the first dataset offering multi-modal annotations specifically tailored to maritime environments. Our dataset is available at https://sites.google.com/view/polaris-dataset.
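The abstract does not specify the annotation format, so as a rough illustration only, the sketch below (plain Python with hypothetical field names, not the PoLaRIS API) shows one common way to separate very small annotated objects, such as the 10x10-pixel targets mentioned above, from larger ones so that detection metrics can be reported per size bucket.

    # Hypothetical illustration: splitting bounding-box annotations by pixel area
    # so detection metrics can be reported separately for very small objects.
    # Field names are assumptions, not taken from the PoLaRIS release.
    from dataclasses import dataclass

    @dataclass
    class Box:
        x: float      # top-left x in pixels
        y: float      # top-left y in pixels
        w: float      # width in pixels
        h: float      # height in pixels
        label: str    # e.g. "buoy", "ship"

    def split_by_size(boxes, small_area=32 * 32):
        """Return (small, large) boxes using a COCO-style area threshold."""
        small = [b for b in boxes if b.w * b.h < small_area]
        large = [b for b in boxes if b.w * b.h >= small_area]
        return small, large

    if __name__ == "__main__":
        boxes = [Box(100, 50, 10, 10, "buoy"), Box(300, 200, 120, 60, "ship")]
        small, large = split_by_size(boxes)
        print(len(small), "small,", len(large), "large objects")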
Nonlinear Model Predictive Control with Obstacle Avoidance Constraints for Autonomous Navigation in a Canal Environment
Lee, Changyu, Chung, Dongha, Kim, Jonghwi, Kim, Jinwhan
In this paper, we describe the development process of autonomous navigation capabilities of a small cruise boat operating in a canal environment and present the results of a field experiment conducted in the Pohang Canal, South Korea. Nonlinear model predictive control (NMPC) was used for the online trajectory planning and tracking control of the cruise boat in a narrow passage in the canal. To consider the nonlinear characteristics of boat dynamics, system identification was performed using experimental data from various test maneuvers, such as acceleration-deceleration and zigzag trials. To efficiently represent the obstacle structures in the canal environment, we parameterized the canal walls as line segments with point cloud data, captured by an onboard LiDAR sensor, and considered them as constraints for obstacle avoidance. The proposed method was implemented in a single NMPC layer, and its real-world performance was verified through experimental runs in the Pohang Canal.
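The paper's own NMPC formulation and system identification are not reproduced here; the minimal sketch below only illustrates the general idea of parameterizing wall points as a line segment and checking a clearance constraint of the form distance(boat, wall) >= d_safe, which a predictive controller could impose at each step. It uses NumPy only, and all variable names are assumptions.

    # Minimal sketch (not the authors' implementation): fit a 2-D line segment
    # to canal-wall LiDAR points and evaluate a clearance constraint.
    import numpy as np

    def fit_segment(points):
        """Fit a line segment to Nx2 wall points via PCA; return its endpoints."""
        centroid = points.mean(axis=0)
        # The principal direction of the point cloud approximates the wall direction.
        _, _, vt = np.linalg.svd(points - centroid)
        direction = vt[0]
        t = (points - centroid) @ direction
        return centroid + t.min() * direction, centroid + t.max() * direction

    def distance_to_segment(p, a, b):
        """Euclidean distance from point p to segment ab."""
        ab, ap = b - a, p - a
        t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    if __name__ == "__main__":
        wall = np.array([[0.0, 2.0], [1.0, 2.1], [2.0, 1.9], [3.0, 2.0]])
        a, b = fit_segment(wall)
        boat = np.array([1.5, 0.0])
        d_safe = 1.0
        print("clearance ok:", distance_to_segment(boat, a, b) >= d_safe)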
Pohang Canal Dataset: A Multimodal Maritime Dataset for Autonomous Navigation in Restricted Waters
Chung, Dongha, Kim, Jonghwi, Lee, Changyu, Kim, Jinwhan
This paper presents a multimodal maritime dataset and the data collection procedure used to gather it, which aims to facilitate autonomous navigation in restricted water environments. The dataset comprises measurements obtained using various perception and navigation sensors, including a stereo camera, an infrared camera, an omnidirectional camera, three LiDARs, a marine radar, a global positioning system, and an attitude heading reference system. The data were collected along a 7.5-km-long route that includes a narrow canal, inner and outer ports, and near-coastal areas in Pohang, South Korea. The collection was conducted under diverse weather and visual conditions. The dataset and its detailed description are available for free download at https://sites.google.com/view/pohang-canal-dataset.
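The actual directory layout and file formats are documented on the dataset site, not in this abstract, so the sketch below is a generic illustration of one routine task with such multimodal logs: pairing each camera frame with the LiDAR scan closest in time. Timestamps and sensor names are placeholders.

    # Rough illustration only; paths, timestamps, and formats are hypothetical.
    import bisect

    def pair_by_timestamp(cam_stamps, lidar_stamps):
        """For each camera timestamp, find the nearest LiDAR timestamp."""
        lidar_sorted = sorted(lidar_stamps)
        pairs = []
        for t in cam_stamps:
            i = bisect.bisect_left(lidar_sorted, t)
            candidates = [j for j in (i - 1, i) if 0 <= j < len(lidar_sorted)]
            best = min(candidates, key=lambda j: abs(lidar_sorted[j] - t))
            pairs.append((t, lidar_sorted[best]))
        return pairs

    if __name__ == "__main__":
        cam = [0.00, 0.10, 0.20]          # seconds
        lidar = [0.02, 0.12, 0.19, 0.31]  # seconds
        print(pair_by_timestamp(cam, lidar))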
Machine Learning Increased Accuracy of Anti-Cancer Drug Response Predictions
Researchers from the Pohang University of Science and Technology (POSTECH) in South Korea say they have successfully increased the accuracy of anti-cancer drug response predictions by using data that most closely reflects human responses. The team developed this machine learning technique through algorithms that learn transcriptome information from artificial organoids derived from actual patients instead of animal models. The team, led by Sanguk Kim, PhD, in the life sciences department, published its findings "Network-based machine learning in colorectal and bladder organoid models predicts anti-cancer drug efficacy in patients" in Nature Communications. "Cancer patient classification using predictive biomarkers for anti-cancer drug responses is essential for improving therapeutic outcomes. However, current machine-learning-based predictions of drug response often fail to identify robust translational biomarkers from preclinical models. Here, we present a machine-learning framework to identify robust drug biomarkers by taking advantage of network-based analyses using pharmacogenomic data derived from three-dimensional organoid culture models," write the investigators. "The biomarkers identified by our approach accurately predict the drug responses of 114 colorectal cancer patients treated with 5-fluorouracil and 77 bladder cancer patients treated with cisplatin."
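The network-based details of the POSTECH framework are in the Nature Communications paper; the snippet below is only a generic, hedged illustration of the surrounding task, predicting drug response from transcriptome features restricted to a candidate biomarker gene set, using synthetic placeholder data and scikit-learn.

    # Generic illustration, not the published pipeline: classify drug response
    # from expression of a (hypothetical) biomarker gene set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_genes = 60, 200
    expression = rng.normal(size=(n_samples, n_genes))   # organoid transcriptomes (synthetic)
    response = rng.integers(0, 2, size=n_samples)        # 1 = drug-sensitive (synthetic)

    biomarker_idx = [3, 17, 42, 99]                      # placeholder biomarker genes
    X = expression[:, biomarker_idx]

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, response, cv=5, scoring="roc_auc")
    print("cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))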
Machine-Learning Tool Improves Cancer Drug Response Prediction
Applying a machine learning technique using algorithms that learn transcriptome information from artificial organoids derived from actual patients instead of animal models, researchers from the Pohang University of Science and Technology (POSTECH) in South Korea say they have successfully increased the accuracy of anti-cancer drug response predictions. The team, led by Sanguk Kim, Ph.D., in the life sciences department, published its findings "Network-based machine learning in colorectal and bladder organoid models predicts anti-cancer drug efficacy in patients" in Nature Communications. "Cancer patient classification using predictive biomarkers for anti-cancer drug responses is essential for improving therapeutic outcomes. However, current machine-learning-based predictions of drug response often fail to identify robust translational biomarkers from preclinical models. Here, we present a machine-learning framework to identify robust drug biomarkers by taking advantage of network-based analyses using pharmacogenomic data derived from three-dimensional organoid culture models," write the investigators. "The biomarkers identified by our approach accurately predict the drug responses of 114 colorectal cancer patients treated with 5-fluorouracil and 77 bladder cancer patients treated with cisplatin. We further confirm our biomarkers using external transcriptomic datasets of drug-sensitive and -resistant isogenic cancer cell lines."
AI Generator Learns to 'Draw' Like Cartoonist Lee Mal-Nyeon in Just 10 Hours
A Seoul National University Master's student and developer has trained a face-generating model to translate normal face photographs into cartoon images in the distinctive style of Lee Mal-nyeon. The student (GitHub user name: bryandlee) used webcomic images by South Korean cartoonist Lee Mal-nyeon (์ด๋ง๋ ) as input data, building a dataset of malnyun cartoon faces and then testing popular deep generative models on it. By combining a pretrained face-generating model with special training techniques, they were able to train a generator at 256x256 resolution in just 10 hours on a single RTX 2080ti GPU, using only 500 manually annotated images. Since the cascade classifier for human faces provided in OpenCV -- a library of programming functions mainly aimed at real-time computer vision -- did not work well on the cartoon domain, the student manually annotated 500 input cartoon face images. The student incorporated FreezeD, a simple yet effective baseline for transfer learning of GANs proposed earlier this year by KAIST (Korea Advanced Institute of Science and Technology) and POSTECH (Pohang University of Science and Technology) researchers, to reduce the burden of heavy data and computational resources when training GANs. The developer tested the idea of freezing the early layers of the generator in transfer learning settings, dubbed FreezeG (freezing generator), and found that "it worked pretty well."
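As a minimal sketch of the "freeze early layers" idea described above (FreezeD freezes lower discriminator layers; the developer's FreezeG variant freezes early generator layers), the PyTorch snippet below freezes the coarse layers of a toy stand-in generator, not StyleGAN, before fine-tuning on a new domain.

    # Sketch only: freeze the first modules of a placeholder generator and
    # optimize the remaining (fine-detail) layers during transfer learning.
    import torch
    import torch.nn as nn

    generator = nn.Sequential(               # placeholder generator
        nn.ConvTranspose2d(64, 32, 4, 2, 1), # early, coarse-structure layers
        nn.ReLU(),
        nn.ConvTranspose2d(32, 16, 4, 2, 1),
        nn.ReLU(),
        nn.ConvTranspose2d(16, 3, 4, 2, 1),  # late, fine-detail layers
    )

    n_frozen = 2  # freeze the first two modules (coarse layers)
    for module in list(generator.children())[:n_frozen]:
        for p in module.parameters():
            p.requires_grad = False

    # Only the still-trainable parameters are handed to the optimizer.
    optimizer = torch.optim.Adam(
        (p for p in generator.parameters() if p.requires_grad), lr=2e-4
    )
    print(sum(p.requires_grad for p in generator.parameters()), "trainable tensors")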
InstaGAN Excels in Instance-Aware Image-To-Image Translation
Researchers at the Korea Advanced Institute of Science and Technology and Pohang University of Science and Technology have introduced a machine learning system, InstaGAN, which can perform multiple instance-aware image-to-image translation tasks -- such as replacing sheep in photos with giraffes -- on multiple image datasets. The paper "InstaGAN: Instance-Aware Image-to-Image Translation" has been accepted by the respected International Conference on Learning Representations (ICLR) 2019, which will take place this May in New Orleans, USA. An image-to-image translation system is a system that learns to map an input image onto an output image. Unsupervised image-to-image translation has garnered considerable research attention recently, in part due to the rapid development of generative adversarial networks (GANs) that now power the technique. Previous methods were not suitable for challenging tasks, for example if the image has multiple target instances or if the translation task involves challenging shapes.
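The core idea of instance-aware translation, giving the generator per-instance information alongside the image so shape changes can be applied instance by instance, can be pictured with the schematic PyTorch toy below; it is not the InstaGAN architecture, and all layer sizes are arbitrary placeholders.

    # Schematic only: a toy generator conditioned on per-instance binary masks.
    import torch
    import torch.nn as nn

    class MaskConditionedGenerator(nn.Module):
        """Toy generator that consumes an RGB image plus one mask per instance."""

        def __init__(self, max_instances=4):
            super().__init__()
            in_ch = 3 + max_instances          # image channels + instance masks
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
                nn.Tanh(),
            )

        def forward(self, image, masks):
            # image: (B, 3, H, W), masks: (B, max_instances, H, W)
            return self.net(torch.cat([image, masks], dim=1))

    if __name__ == "__main__":
        g = MaskConditionedGenerator()
        img = torch.rand(1, 3, 64, 64)
        masks = torch.zeros(1, 4, 64, 64)
        masks[0, 0, 16:48, 16:48] = 1.0       # one "sheep" instance mask
        print(g(img, masks).shape)            # torch.Size([1, 3, 64, 64])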
Finally, a Neural Network Can Turn a Bunch of Sheep Into Some Decent Giraffes
Using machine learning to swap out one image for another isn't new, but ask these systems to do something a bit more complicated than a basic faceswap, and you can end up with underwhelming or laughable results. Now, researchers say they have developed a technique that produces far more convincing images for complex tasks, such as swapping out skirts for pants, cups for bottles, or a herd of sheep for a mess of giraffes. Researchers at the Korea Advanced Institute of Science and Technology and the Pohang University of Science and Technology say their technique, InstaGAN, stands out for its ability to tackle changes in shape and to alter multiple image elements at once. The method, based on generative adversarial networks (GANs), processes instance information, such as the ability to identify objects and boundaries in images. In their paper, the researchers write that they believe they are the first to develop such a technique for these more complex images, "to the best of our knowledge."
"Artificial Synapses" Could Let Supercomputers Mimic the Human Brain
Large-scale brain-like machines with human-like abilities to solve problems could become a reality, now that researchers have invented microscopic gadgets that mimic the connections between neurons in the human brain better than any previous devices. The new research could lead to better robots, self-driving cars, data mining, medical diagnosis, stock-trading analysis and "other smart human-interactive systems and machines in the future," said Tae-Woo Lee, a materials scientist at the Pohang University of Science and Technology in Korea and senior author of the study. The human brain's enormous computing power stems from its connections. Previous research suggested that the brain has approximately 100 billion neurons and roughly 1 quadrillion (1 million billion) connections wiring these cells together. At each of these connections, or synapses, a neuron typically fires about 10 times per second.